
Concrete Compressive Strength Test

An example of a multivariate data type classification problem using Neuroph

by Kostic Lazar, Faculty of Organisation Sciences, University of Belgrade

An experiment for the Intelligent Systems course

 

Introduction

An Artificial Neural Network (ANN), usually called neural network (NN), is a mathematical model or computational model that is inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.

Introducing the problem

Concrete is the most important material in civil engineering. The concrete compressive strength is a highly nonlinear function of age and ingredients.
For this project, we will use the Neuroph framework and the Concrete Compressive Strength dataset, which can be found here.
This dataset contains 1030 instances, 8 input attributes and 1 output attribute. All 9 attributes are real numbers.
Given below are the variable name, variable type, measurement unit and a brief description. Predicting concrete compressive strength is a regression problem. The order of this listing corresponds to the order of the attributes within each row of the dataset.

Attribute Information:

Name -- Data Type -- Measurement -- Description

Cement (component 1) -- quantitative -- kg in a m3 mixture -- Input Variable
Blast Furnace Slag (component 2) -- quantitative -- kg in a m3 mixture -- Input Variable
Fly Ash (component 3) -- quantitative -- kg in a m3 mixture -- Input Variable
Water (component 4) -- quantitative -- kg in a m3 mixture -- Input Variable
Superplasticizer (component 5) -- quantitative -- kg in a m3 mixture -- Input Variable
Coarse Aggregate (component 6) -- quantitative -- kg in a m3 mixture -- Input Variable
Fine Aggregate (component 7) -- quantitative -- kg in a m3 mixture -- Input Variable
Age -- quantitative -- Day (1~365) -- Input Variable
Concrete compressive strength -- quantitative -- MPa -- Output Variable

If you are familiar with the Neuroph framework, we can continue. Otherwise, we strongly recommend that you familiarize yourself with these terms before continuing.

 

Procedure of training a neural network

In order to train a neural network, there are six steps to be made:

1. Normalize the data

2. Create a Neuroph project

3. Create a training set

4. Create a neural network

5. Train the network

6. Test the network to make sure that it is trained properly

 

Step 1. Normalizing the data

Before training, the chosen dataset must be normalized. That means every single value in the dataset must be a number between 0 and 1.

In our case, we can reach this goal by applying the following formula to every value in our dataset:

Xn = (X - Xmin) / (Xmax - Xmin)

The value X is the value to be normalized. In our dataset we have 1030 instances and 9 attributes, so we have to normalize 1030 * 9 = 9270 values.
The value Xn is the normalized value of X.
The value Xmin is the smallest number in the column where value X is located.
The value Xmax is the largest number in the column where value X is located.
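The min-max normalization described above can be sketched in plain Java as follows. This is an illustrative helper (the class name `MinMaxNormalizer` is ours, not part of Neuroph), applying the formula column by column:

```java
public class MinMaxNormalizer {
    /** Normalizes each column of the data matrix into [0, 1] in place,
     *  using Xn = (X - Xmin) / (Xmax - Xmin) per column. */
    public static void normalizeColumns(double[][] data) {
        if (data.length == 0) return;
        int cols = data[0].length;
        for (int c = 0; c < cols; c++) {
            // Find the smallest and largest value in this column.
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] row : data) {
                min = Math.min(min, row[c]);
                max = Math.max(max, row[c]);
            }
            double range = max - min;
            for (double[] row : data) {
                // A constant column is mapped to 0 to avoid division by zero.
                row[c] = range == 0 ? 0.0 : (row[c] - min) / range;
            }
        }
    }
}
```

Note that Xmin and Xmax are computed per column, so each attribute is scaled independently of the others.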

 

Step 2. Creating a new Neuroph project

After normalizing our dataset, we can start building the project. First, start Neuroph Studio. To create a new project, select File -> New Project.

After selecting Neuroph Project and clicking "Next", enter the project name, in our example "ConcreteProject", and the project location. We will use the default project location. Then press the "Finish" button to finish creating the new project.

Step 3. Creating a Training set

After creating the new project, the next step is to create a training set. We do that by selecting Training -> New Training Set in the main menu.

Then we press "Next" and get a new window where we enter the training set name, choose the type, and enter the number of inputs and outputs.

In our example, we will call this training set "TrainingSet70" because we will use 70% of our dataset. The type of the training set stays "Supervised" because our dataset has both input and output attributes, so we can measure how big the error is. We enter 8 as the number of inputs and 1 as the number of outputs, matching our dataset as described above.

After pressing "Next", we must insert data into the training set table. We do this by clicking the "Load From File" button, choosing our dataset file, and pressing "Load".

We can now press "Finish" to finish importing the dataset.

We do the same thing for the remaining 30% of our dataset, calling it "TestSet30". After training on the first 70%, we will use this 30% for testing.

Next, we must drag TestSet30 from Training Sets to Test Sets.

Now we are finished creating the training sets.
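The 70/30 split above can be prepared outside Neuroph Studio as well. A minimal sketch in plain Java (the class name `DatasetSplitter` is ours, not part of Neuroph) shuffles the rows with a fixed seed and cuts them at the chosen ratio:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class DatasetSplitter {
    /** Shuffles the rows with a fixed seed and splits them into a
     *  training part (trainRatio of the rows) and a test part (the rest). */
    public static List<List<double[]>> split(List<double[]> rows, double trainRatio, long seed) {
        List<double[]> shuffled = new ArrayList<>(rows);
        Collections.shuffle(shuffled, new Random(seed));
        int cut = (int) Math.round(shuffled.size() * trainRatio);
        List<double[]> train = new ArrayList<>(shuffled.subList(0, cut));
        List<double[]> test = new ArrayList<>(shuffled.subList(cut, shuffled.size()));
        return List.of(train, test);
    }
}
```

For our 1030 instances, a 0.7 ratio yields 721 training rows and 309 test rows; the fixed seed makes the split reproducible.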

Training attempt 1

Step 4.1 Creating a neural network

Now we are going to create our first neural network. First, right-click the project in the "Projects" window, then click "New", then "Neural Network". A new window opens where you must name the neural network and choose a neural network type.
In our case, we will call the first neural network "Mreza1" and choose the "Multi Layer Perceptron" type. The multi layer perceptron is commonly used for this kind of problem and is easy to work with, which is why we have chosen it.

By pressing the "Next" button, we get a new window with more options for this type of network.
First, we enter the number of input and output neurons: 8 inputs and 1 output, as before. The new setting is the number of hidden neurons. Previous experience suggests that a larger number of hidden neurons often gives better results. For our first attempt, we will enter a small number of hidden neurons, for example 2.
Next, we check "Use Bias Neurons", because bias neurons are added to neural networks to help them learn patterns. For the transfer function we select "Sigmoid", which suits this kind of problem, and for the learning rule we select "Backpropagation with Momentum", because that algorithm typically converges much faster than plain Backpropagation.
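For reference, the sigmoid transfer function selected above is the standard logistic function, which squashes any input into the interval (0, 1) and is why the dataset had to be normalized into the same range. A one-line sketch (the class name `Sigmoid` is ours, for illustration only):

```java
public class Sigmoid {
    /** Sigmoid transfer function: f(x) = 1 / (1 + e^(-x)), output in (0, 1). */
    public static double apply(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
}
```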

After pressing "Finish", the new neural network is created and looks like this.

Step 5.1 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.2 and momentum will be 0.7. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.
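The role of the learning rate and momentum can be illustrated with the usual momentum weight update, Δw = -learningRate * gradient + momentum * previousΔw. This is a generic sketch of the rule, not Neuroph's internal implementation; the class name `MomentumUpdate` is ours:

```java
public class MomentumUpdate {
    /** One momentum-backpropagation weight change:
     *  deltaW = -learningRate * gradient + momentum * previous deltaW.
     *  The learning rate scales the step along the gradient; the momentum
     *  term carries over a fraction of the previous step, smoothing and
     *  accelerating convergence. */
    public static double delta(double gradient, double prevDelta,
                               double learningRate, double momentum) {
        return -learningRate * gradient + momentum * prevDelta;
    }
}
```

With our first parameters (learning rate 0.2, momentum 0.7), a constant gradient of 1.0 gives a first step of -0.2 and a second, larger step of -0.34 as momentum accumulates.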

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 8 iterations, but that is expected because we set the learning rate slightly high.

Step 6.1 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.029954952882278795, which is solid. In this case we have not found any extreme error values in the results. A few of them are larger, but that is common. So the conclusion is that this is a solid network solution.
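The total mean square error reported here is the average of the squared differences between the network's outputs and the desired outputs over the test set. A minimal sketch of the standard MSE formula (Neuroph's exact internal formula may differ slightly; the class name `MeanSquareError` is ours):

```java
public class MeanSquareError {
    /** Standard mean square error over all (predicted, target) pairs:
     *  MSE = (1/n) * sum((predicted[i] - target[i])^2). */
    public static double compute(double[] predicted, double[] target) {
        double sum = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            double e = predicted[i] - target[i];
            sum += e * e;
        }
        return sum / predicted.length;
    }
}
```

Because both outputs and targets are normalized to [0, 1], an MSE around 0.02-0.03 corresponds to typical per-instance deviations on the order of 0.15 in the normalized scale.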

Training attempt 2

Step 4.2 Creating a neural network

In this attempt we will use the same neural network as we used in previous attempt.

Step 5.2 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.5 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 7 iterations.

Step 6.2 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.024233883496853004, which is better than the previous attempt. In this case the deviations are even smaller; most of them are under 0.03, and the few larger ones are no more than 0.3. So the conclusion is that this is a solid network solution, better than the previous one.

Training attempt 3

Step 4.3 Creating a neural network

In this attempt we will use the same neural network as we used in two previous attempts.

Step 5.3 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.8 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised after 4 iterations.

Step 6.3 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.02371182853617191, which is good, although the previous attempt was slightly better overall: in this case we found more extreme error values than before. Even so, this is a solid network.

Training attempt 4

Step 4.4 Creating a neural network

Now we are going to create a new neural network in the same way as we did for the first attempt. The only differences will be the number of hidden neurons and the name of the network. This time the network will have 6 hidden neurons and will be called "Mreza2".


The new neural network looks like this


Step 5.4 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.2 and momentum will be 0.7. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 10 iterations.

Step 6.4 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.024860661076810262, which is solid. We have not found many extreme error values; most of them are under 0.03, and the few larger ones are not much bigger. So the conclusion is that this is a solid network solution.

Training attempt 5

Step 4.5 Creating a neural network

In this attempt we will use the same neural network as we used in previous attempt.

Step 5.5 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.5 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 7 iterations.

Step 6.5 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.022772073930053607, which is slightly better than the previous attempt, and we found fewer extreme error values than before. The few larger ones are acceptable. So the conclusion is that this is a solid network solution.

Training attempt 6

Step 4.6 Creating a neural network

In this attempt we will use the same neural network as we used in two previous attempts.

Step 5.6 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.8 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 7 iterations.

Step 6.6 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.029243521074759186. The previous attempt was slightly better. In this case we found more extreme error values than before; a few of them are greater than 0.3. Still, the conclusion is that this is also a solid network solution.

Training attempt 7

Step 4.7 Creating a neural network

Now we are going to create a new neural network in the same way as we did for the first attempt. The only differences will be the number of hidden neurons and the name of the network. This time the network will have 10 hidden neurons and will be called "Mreza3".


The new neural network looks like this


Step 5.7 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.2 and momentum will be 0.7. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 10 iterations.

Step 6.7 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.03619902430698928, which is solid. We have not found notably extreme error values; a few are greater than the total mean error, but only a few. So the conclusion is that this is a solid network solution.

Training attempt 8

Step 4.8 Creating a neural network

In this attempt we will use the same neural network as we used in previous attempt.

Step 5.8 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.5 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 13 iterations.

Step 6.8 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.02721926664841973, which is good in general and better than the previous attempt. We found fewer extreme error values than before; a few are greater than the total mean error, but none greater than 0.4. So the conclusion is that this is a solid network solution.

Training attempt 9

Step 4.9 Creating a neural network

In this attempt we will use the same neural network as we used in two previous attempts.

Step 5.9 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.8 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised after 12 iterations.

Step 6.9 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.028235839602564945. The previous attempt was slightly better. We have not found notably extreme error values; most of them are small, with only a few larger ones. So the conclusion is that this is a solid network solution.

Training attempt 10

Step 4.10 Creating a neural network

Now we are going to create a new neural network in the same way as we did for the first attempt. The only differences will be the number of hidden neurons and the name of the network. This time the network will have 15 hidden neurons and will be called "Mreza4".


The new neural network looks like this


Step 5.10 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.2 and momentum will be 0.7. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 4 iterations.

Step 6.10 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.02983246741767826, which is solid. We have not found many extreme error values; the few larger ones are not very big. So the conclusion is that this is a solid network solution.

Training attempt 11

Step 4.11 Creating a neural network

In this attempt we will use the same neural network as we used in previous attempt.

Step 5.11 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.5 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 22 iterations.

Step 6.11 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.02077704480561135, which is the best so far. We have not found many extreme error values; most of them are under 0.03, and the few larger ones are no more than 0.3. So the conclusion is that this is a very good network solution.

Training attempt 12

Step 4.12 Creating a neural network

In this attempt we will use the same neural network as we used in two previous attempts.

Step 5.12 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.8 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised after 17 iterations.

Step 6.12 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.031297580963363054. We have not found many extreme error values; a few are greater than the total mean error, but only a few. So the conclusion is that this is a solid network solution.

Training attempt 13

Step 4.13 Creating a neural network

Now we are going to create a new neural network in the same way as we did for the first attempt. The only differences will be the number of hidden neurons and the name of the network. This time the network will have 20 hidden neurons and will be called "Mreza5".


The new neural network looks like this


Step 5.13 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.2 and momentum will be 0.7. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 8 iterations.

Step 6.13 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.02592450202585866, which is lower than in the previous attempt. However, we found many individual errors that are much greater than the total mean error, so despite the lower total error this is the worst network solution so far.

Training attempt 14

Step 4.14 Creating a neural network

In this attempt we will use the same neural network as we used in previous attempt.

Step 5.14 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.5 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised very quickly, after 27 iterations.

Step 6.14 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.018734970703142225, which is the best in our entire testing process. There is also very little deviation in the errors, so we can conclude that this is the best network solution.

Training attempt 15

Step 4.15 Creating a neural network

In this attempt we will use the same neural network as we used in two previous attempts.

Step 5.15 Training the neural network

Now we need to train the network using the training set we have created. We select the training set and click 'Train'. A new window will open, where we need to set the learning parameters. The maximum error will be 0.01, learning rate will be 0.8 and momentum will be 0.8. Learning rate is basically the size of the 'steps' the algorithm will take when minimizing the error function in an iterative process. We click 'Train' and see what happens.

A graph will appear where we can see how the error is changing in every iteration. The error is minimised after 13 iterations.

Step 6.15 Testing the neural network

After the network is trained, we choose "TestSet30" to test it. It is strongly recommended to test on the part of the dataset that was not used for training, which is why we set aside "TestSet30".

Now we can press the "Test" button to see what happens.

The Total Mean Square Error is 0.03176565595110113. This is worse than the previous attempt. In this case we found some extreme error values, but most of them are close to the total mean error. So the conclusion is that this is a solid network solution.

Conclusion

We created five different neural networks with different numbers of hidden neurons. After much testing, we came to the conclusion that the number of hidden neurons is crucial in this experiment. We also found that training on 70% of our dataset works best. After many tests with different parameters, attempt number 14 proved to be the best: 20 hidden neurons, trained on 70% of the dataset with the remaining 30% used for testing, a learning rate of 0.5 and a momentum of 0.8. We set 0.01 as the maximum error, which was reached after 27 iterations. Testing on the remaining 30% of the dataset gave a total mean square error of 0.01873, smaller than in any other attempt. After training and testing other splits of the dataset with the same parameters, the conclusion was the same: attempt number 14 is the best.

Below is a table that summarizes the experiment. Attempt 14 gave the best solution for the problem.

Attempt | Hidden neurons | Training set       | Test set           | Max error | Learning rate | Momentum | Total mean square error | Iterations
--------|----------------|--------------------|--------------------|-----------|---------------|----------|-------------------------|-----------
1       | 2              | 70% of instances   | remaining 30%      | 0.01      | 0.2           | 0.7      | 0.02995                 | 8
2       | 2              | 70% of instances   | remaining 30%      | 0.01      | 0.5           | 0.8      | 0.02423                 | 7
3       | 2              | 70% of instances   | remaining 30%      | 0.01      | 0.8           | 0.8      | 0.02371                 | 4
4       | 6              | 70% of instances   | remaining 30%      | 0.01      | 0.2           | 0.7      | 0.02486                 | 10
5       | 6              | 70% of instances   | remaining 30%      | 0.01      | 0.5           | 0.8      | 0.02277                 | 7
6       | 6              | 70% of instances   | remaining 30%      | 0.01      | 0.8           | 0.8      | 0.02924                 | 7
7       | 10             | 70% of instances   | remaining 30%      | 0.01      | 0.2           | 0.7      | 0.03620                 | 10
8       | 10             | 70% of instances   | remaining 30%      | 0.01      | 0.5           | 0.8      | 0.02722                 | 13
9       | 10             | 70% of instances   | remaining 30%      | 0.01      | 0.8           | 0.8      | 0.02824                 | 12
10      | 15             | 70% of instances   | remaining 30%      | 0.01      | 0.2           | 0.7      | 0.02983                 | 4
11      | 15             | 70% of instances   | remaining 30%      | 0.01      | 0.5           | 0.8      | 0.02078                 | 22
12      | 15             | 70% of instances   | remaining 30%      | 0.01      | 0.8           | 0.8      | 0.03130                 | 17
13      | 20             | 70% of instances   | remaining 30%      | 0.01      | 0.2           | 0.7      | 0.02593                 | 8
14 (best) | 20           | 70% of instances   | remaining 30%      | 0.01      | 0.5           | 0.8      | 0.01873                 | 27
15      | 20             | 70% of instances   | remaining 30%      | 0.01      | 0.8           | 0.8      | 0.03177                 | 13
16      | 20             | 60% of instances   | remaining 40%      | 0.01      | 0.5           | 0.8      | 0.02054                 | 36
17      | 20             | 80% of instances   | remaining 20%      | 0.01      | 0.5           | 0.8      | 0.02816                 | 29
18      | 20             | 100% of instances  | 100% of instances  | 0.01      | 0.5           | 0.8      | 0.02395                 | 19

DOWNLOAD

See also:
Multi Layer Perceptron Tutorial

 
